43 research outputs found

    Formal Analysis of SPDM: Security Protocol and Data Model version 1.2

    Get PDF
    DMTF is a standards organization backed by major industry players in IT infrastructure, including AMD, Alibaba, Broadcom, Cisco, Dell, Google, Huawei, IBM, Intel, Lenovo, and NVIDIA, which aims to enable interoperability across domains such as cloud, virtualization, networking, servers, and storage. It is currently standardizing a security protocol called SPDM, which aims to secure communication over the wire and to enable device attestation, notably also explicitly catering for communicating hardware components. The SPDM protocol inherits requirements and design ideas from IETF's TLS 1.3. However, its state machines and transcript handling are substantially different and more complex. While the architecture, specification, and open-source libraries of the current versions of SPDM are publicly available, they include no significant security analysis of any kind. In this work we develop the first formal models of the three modes of the SPDM protocol version 1.2.1, and formally analyze their main security properties.
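    The transcript handling mentioned above follows the TLS 1.3 idea of binding signatures and derived keys to a running hash over all messages exchanged so far, so that any divergence in the two parties' views of the conversation is detected. The sketch below is illustrative only; the message strings are hypothetical stand-ins, not the actual SPDM wire format.

```python
import hashlib

class Transcript:
    """Running hash over every protocol message exchanged so far."""

    def __init__(self):
        self._h = hashlib.sha256()

    def append(self, message: bytes) -> None:
        # Both endpoints append each message, in order, as sent/received.
        self._h.update(message)

    def digest(self) -> bytes:
        # Snapshot of the current transcript hash; signatures and session
        # keys are bound to this value.
        return self._h.copy().digest()

requester, responder = Transcript(), Transcript()
for msg in [b"GET_VERSION", b"VERSION 1.2.1", b"GET_CAPABILITIES"]:
    requester.append(msg)
    responder.append(msg)
assert requester.digest() == responder.digest()
```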

    How many qubits are needed for quantum computational supremacy?

    Get PDF
    Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse assumption. Each version is parameterized by a constant a and asserts that certain specific computational problems with input size n require 2^{an} time steps to be solved by a non-deterministic algorithm. Then, we choose a specific value of a for each version that we argue makes the assumption plausible, and based on these conjectures we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits, and boson sampling circuits (i.e. linear optical networks) with 98 photons are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. In the first two cases, we extend this to constant additive error by introducing an average-case fine-grained conjecture.
    Comment: 24 pages + 3 appendices, 8 figures. v2: number of qubits calculation updated and conjectures clarified after becoming aware of Ref. [42]. v3: Section IV and Appendix C added to incorporate additive-error simulation.
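    To see how a fine-grained conjecture of this shape turns into a concrete qubit count, consider a back-of-the-envelope bound (illustrative numbers only, not the paper's exact accounting): a classical machine performing 10^18 operations per second for one year executes about 3 × 10^25 ≈ 2^85 operations, so a simulation costing 2^{an} steps is out of reach once

```latex
\underbrace{10^{18}\ \text{ops/s} \times 3.15\times 10^{7}\ \text{s}}_{\text{one machine-year}}
\;\approx\; 3\times 10^{25} \;\approx\; 2^{85},
\qquad
2^{an} > 2^{85} \;\iff\; n > \frac{85}{a}.
```

    Smaller plausible values of a therefore force larger circuits before classical simulation becomes intractable.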

    Evaluation of the Yield and Quality of the Vacuum Distillate of Used Lubricating Oil

    Get PDF
    This research project aims to recover the lubricant base present in used lubricating oils (ULO) by means of vacuum distillation, since ULO discarded without prior treatment cause severe environmental damage and harm public health due to their content of hydrocarbons and heavy metals. The general objective was therefore to evaluate the effectiveness of the vacuum distillation process of used lubricating oils in terms of yield, with the specific objectives of evaluating the influence of pressure and temperature on the yield and subsequently evaluating the basic physical properties of the distillate of the used lubricating oil. To carry out the project, ULO samples were taken from different lubrication shops in the city of Cusco, homogenized, and left to rest for 24 hours; 100 ml samples of ULO were then distilled at pressures of 48.54 mmHg, 68.54 mmHg, and 88.53 mmHg, and their behavior was observed at temperature cuts of 350 ºC, 365 ºC, and 380 ºC, recovering the lubricant base present in the ULO with a total yield ranging between 86% and 90%. The influence of the input variables on the response variables was then evaluated by means of an analysis of variance, which showed that temperature has the greater influence on the amount of lubricant base recovered.
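    In these terms, the distillation yield reported above is simply the recovered volume relative to the 100 ml feed:

```latex
R \;=\; \frac{V_{\text{distillate}}}{V_{\text{feed}}} \times 100\,\%,
\qquad V_{\text{feed}} = 100\ \text{ml},
\qquad R \approx 86\text{--}90\,\%.
```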

    Semi-Supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation

    Get PDF
    Accurate building height estimation is key to the automatic derivation of 3D city models from emerging big geospatial data, including Volunteered Geographical Information (VGI). However, an automatic solution for large-scale building height estimation based on low-cost VGI data is currently missing. The fast development of VGI data platforms, especially OpenStreetMap (OSM) and crowdsourced street-view images (SVI), offers a stimulating opportunity to fill this research gap. In this work, we propose a semi-supervised learning (SSL) method for automatically estimating building height from Mapillary SVI and OSM data to generate low-cost and open-source 3D city models in LoD1. The proposed method consists of three parts: first, we propose an SSL schema with the option of setting a different ratio of "pseudo labels" during the supervised regression; second, we extract multi-level morphometric features from OSM data (i.e., buildings and streets) for the purpose of inferring building height; last, we design a building floor estimation workflow with a pre-trained facade object detection network to generate "pseudo labels" from SVI and assign them to the corresponding OSM building footprints. In a case study, we validate the proposed SSL method in the city of Heidelberg, Germany, and evaluate the model performance against reference building height data. Based on three different regression models, namely Random Forest (RF), Support Vector Machine (SVM), and Convolutional Neural Network (CNN), the SSL method leads to a clear performance boost in estimating building heights, with a Mean Absolute Error (MAE) of around 2.1 meters, which is competitive with state-of-the-art approaches. This preliminary result is promising and motivates our future work on scaling up the proposed method based on low-cost VGI data, even in regions and areas with diverse data quality and availability.
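    The pseudo-label schema can be sketched in a few lines. The sketch below is a heavily simplified, hypothetical stand-in (a one-feature least-squares regressor instead of RF/SVM/CNN, and invented toy numbers): a regressor is fit on the labeled buildings, pseudo-labels are generated for unlabeled footprints (in the paper these come from SVI-detected floor counts rather than the model itself), and a chosen ratio of them is mixed back into training.

```python
def fit_slope(xs, ys):
    # One-feature least-squares regression through the origin:
    # height ≈ w * feature, with w = Σxy / Σx².
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def semi_supervised_fit(labeled, unlabeled_x, pseudo_ratio):
    xs, ys = zip(*labeled)
    w = fit_slope(xs, ys)                           # 1. supervised fit
    n = int(pseudo_ratio * len(unlabeled_x))        # 2. ratio of pseudo labels to trust
    pseudo = [(x, w * x) for x in unlabeled_x[:n]]  # 3. generate pseudo labels
    all_x, all_y = zip(*(list(labeled) + pseudo))
    return fit_slope(all_x, all_y)                  # 4. refit on the enlarged set

# Toy data: building height ≈ 3 m per floor (invented for illustration).
labeled = [(2, 6.1), (4, 11.8), (5, 15.2)]   # (floors, measured height in m)
unlabeled = [3, 6, 8, 10]                    # floors with no height label
w = semi_supervised_fit(labeled, unlabeled, pseudo_ratio=0.5)
assert abs(w - 3.0) < 0.2
```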

    Automated Analysis of Protocols that use Authenticated Encryption: How Subtle AEAD Differences can impact Protocol Security

    Get PDF
    Many modern security protocols such as TLS, WPA2, WireGuard, and Signal use a cryptographic primitive called Authenticated Encryption (optionally with Authenticated Data), also known as an AEAD scheme. AEAD is a variant of symmetric encryption that additionally provides authentication. While authentication may seem to be a straightforward additional requirement, it has in fact turned out to be complex: many different security notions for AEADs are still being proposed, and several recent protocol-level attacks exploit subtle behaviors that differ among real-world AEAD schemes. We provide the first automated analysis method for protocols that use AEADs that can systematically find attacks that exploit the subtleties of the specific type of AEAD used. This can then be used to analyze specific protocols with a fixed AEAD choice, or to provide guidance on which AEADs might be (in)sufficient to make a protocol design secure. We develop generic symbolic AEAD models, which we instantiate for the Tamarin prover. Our approach can automatically and efficiently discover protocol attacks that could previously only be found using manual inspection, such as the Salamander attack on Facebook’s message franking, and attacks on SFrame and YubiHSM. Furthermore, our analysis reveals undesirable behaviors of several other protocols.
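    As a refresher on the primitive itself, the toy encrypt-then-MAC composition below shows the AEAD interface: the tag covers the nonce, the associated data, and the ciphertext, so modifying any of them makes decryption fail. This is an illustrative sketch only (a real scheme such as AES-GCM uses a proper block cipher, and keys for encryption and MAC should be derived separately), not one of the paper's symbolic models.

```python
import hashlib
import hmac
import os

TAG_LEN = 32  # HMAC-SHA256 output length

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher: pseudorandom keystream derived from key and nonce.
    return hashlib.shake_256(key + nonce).digest(length)

def seal(key: bytes, nonce: bytes, plaintext: bytes, ad: bytes) -> bytes:
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    # The tag authenticates nonce, associated data, and ciphertext together.
    tag = hmac.new(key, nonce + ad + ct, hashlib.sha256).digest()
    return ct + tag

def open_(key: bytes, nonce: bytes, sealed: bytes, ad: bytes) -> bytes:
    ct, tag = sealed[:-TAG_LEN], sealed[-TAG_LEN:]
    expected = hmac.new(key, nonce + ad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")  # reject before decrypting
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key, nonce = os.urandom(32), os.urandom(12)
msg, ad = b"attack at dawn", b"message header"
sealed = seal(key, nonce, msg, ad)
assert open_(key, nonce, sealed, ad) == msg
```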

    How to wrap it up - A formally verified proposal for the use of authenticated wrapping in PKCS#11

    Get PDF
    Being the most widely used and comprehensive standard for hardware security modules, cryptographic tokens and smart cards, PKCS#11 has been the subject of academic study for years. PKCS#11 provides a key store that is separate from the application, so that, ideally, an application never sees a key in the clear. Again and again, researchers have pointed out the need for an import/export mechanism that ensures the integrity of the permissions associated to a key. With version 2.40, for the first time, the standard included authenticated deterministic encryption schemes. The interface to this operation is insecure, however, so that an application can get the key in the clear, subverting the purpose of using a hardware security module. This work proposes a formal model for the secure use of authenticated deterministic encryption in PKCS#11, including concrete API changes to allow for secure policies to be implemented. Owing to the authenticated encryption mechanism, the policy we propose provides more functionality than any policy proposed so far and can be implemented without access to a random number generator. Our results cover modes of operation that rely on unique initialisation vectors (IVs), like GCM or CCM, but also modes that generate synthetic IVs. We furthermore provide a proof for the deduction soundness of our modelling of deterministic encryption in Böhl et al.'s composable deduction soundness framework.
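    The synthetic-IV idea mentioned at the end can be illustrated with a toy construction: the IV is a MAC over the associated data and plaintext, so encryption is fully deterministic and needs no random number generator, and the IV doubles as the authentication tag. This is a hedged sketch with made-up key handling (a real design would derive separate MAC and encryption keys), not one of the modes analyzed in the paper.

```python
import hashlib
import hmac

TAG_LEN = 16

def siv_seal(key: bytes, plaintext: bytes, ad: bytes) -> bytes:
    # Synthetic IV: a deterministic MAC over associated data and plaintext.
    siv = hmac.new(key, ad + plaintext, hashlib.sha256).digest()[:TAG_LEN]
    ks = hashlib.shake_256(key + siv).digest(len(plaintext))  # toy keystream
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    return siv + ct

def siv_open(key: bytes, sealed: bytes, ad: bytes) -> bytes:
    siv, ct = sealed[:TAG_LEN], sealed[TAG_LEN:]
    ks = hashlib.shake_256(key + siv).digest(len(ct))
    pt = bytes(c ^ k for c, k in zip(ct, ks))
    expected = hmac.new(key, ad + pt, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(expected, siv):
        raise ValueError("authentication failed")
    return pt

key = b"k" * 32  # hypothetical fixed key, for illustration only
sealed = siv_seal(key, b"wrapped key bytes", b"key-attributes")
assert siv_open(key, sealed, b"key-attributes") == b"wrapped key bytes"
# Deterministic: the same inputs always produce the same output.
assert sealed == siv_seal(key, b"wrapped key bytes", b"key-attributes")
```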

    Hash Gone Bad: Automated discovery of protocol attacks that exploit hash function weaknesses

    Get PDF
    Most cryptographic protocols use cryptographic hash functions as a building block. The security analyses of these protocols typically assume that the hash functions are perfect (such as in the random oracle model). However, in practice, most widely deployed hash functions are far from perfect -- and as a result, the analysis may miss attacks that exploit the gap between the model and the actual hash function used. We develop the first methodology to systematically discover attacks on security protocols that exploit weaknesses in widely deployed hash functions. We achieve this by revisiting the gap between theoretical properties of hash functions and the weaknesses of real-world hash functions, from which we develop a lattice of threat models. For all of these threat models, we develop fine-grained symbolic models. Our methodology's fine-grained models cannot be directly encoded in existing state-of-the-art analysis tools by just using their equational reasoning. We therefore develop extensions for the two leading tools, Tamarin and ProVerif. In extensive case studies using our methodology, the extended tools rediscover all attacks that were previously reported for these protocols and discover several new variants.
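    The gap between an idealized hash and a weak real-world one is easy to demonstrate concretely: the birthday search below finds a collision for a deliberately weakened (24-bit) hash in a few thousand attempts. The truncated hash is a hypothetical stand-in for illustration, not one of the functions studied in the paper.

```python
import hashlib
from itertools import count

def weak_hash(data: bytes) -> bytes:
    # Hypothetical weak hash: SHA-256 truncated to 24 bits.
    return hashlib.sha256(data).digest()[:3]

def find_collision():
    # Birthday search: with a 24-bit output, a collision is expected
    # after roughly 2^12 ≈ 4096 attempts.
    seen = {}
    for i in count():
        msg = str(i).encode()
        h = weak_hash(msg)
        if h in seen:
            return seen[h], msg  # two distinct messages, same digest
        seen[h] = msg

m1, m2 = find_collision()
assert m1 != m2 and weak_hash(m1) == weak_hash(m2)
```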

    Shared Pattern of Endocranial Shape Asymmetries among Great Apes, Anatomically Modern Humans, and Fossil Hominins

    Get PDF
    Anatomical asymmetries of the human brain are a topic of major interest because of their link with handedness and cognitive functions. Their emergence and occurrence have been extensively explored in the human fossil record to document the evolution of brain capacities and behaviour. We quantified for the first time antero-posterior endocranial shape asymmetries in large samples of great apes, modern humans, and fossil hominins through analysis of "virtual" 3D models of the skull and endocranial cavity, and statistically tested for departures from symmetry. Because it is based on continuous variables, our analysis of these brain asymmetries yields original results that build upon previous analyses based on discrete traits. In particular, it emerges that the degree of petalial asymmetry differs between great apes and hominins without modification of its pattern. We demonstrate the presence of shape asymmetries in great apes, with a pattern similar to that of modern humans but with lower variation and a lower degree of fluctuating asymmetry. More importantly, variations in the position of the frontal and occipital poles on the right and left hemispheres would be expected to show some degree of antisymmetry when the population distribution is considered, but the observed pattern of variation among the samples is related to fluctuating asymmetry for most components of the petalias. Moreover, the presence of a common pattern of significant directional asymmetry for two components of the petalias in hominids implies that the observed traits were probably inherited from the last common ancestor of extant African great apes and Homo sapiens.

    Another Shipment of Six Short-Period Giant Planets from TESS

    Full text link
    We present the discovery and characterization of six short-period, transiting giant planets from NASA's Transiting Exoplanet Survey Satellite (TESS) -- TOI-1811 (TIC 376524552), TOI-2025 (TIC 394050135), TOI-2145 (TIC 88992642), TOI-2152 (TIC 395393265), TOI-2154 (TIC 428787891), & TOI-2497 (TIC 97568467). All six planets orbit bright host stars (8.9 < G < 11.8, 7.7 < K < 10.1). Using a combination of time-series photometric and spectroscopic follow-up observations from the TESS Follow-up Observing Program (TFOP) Working Group, we have determined that the planets are Jovian-sized (R_P = 1.00-1.45 R_J), have masses ranging from 0.92 to 5.35 M_J, and orbit F, G, and K stars (4753 K < T_eff < 7360 K). We detect a significant orbital eccentricity for the three longest-period systems in our sample: TOI-2025 b (P = 8.872 days, e = 0.220 ± 0.053), TOI-2145 b (P = 10.261 days, e = 0.182^{+0.039}_{-0.049}), and TOI-2497 b (P = 10.656 days, e = 0.196^{+0.059}_{-0.053}). TOI-2145 b and TOI-2497 b both orbit subgiant host stars (3.8 < log g < 4.0), but these planets show no sign of inflation despite very high levels of irradiation. The lack of inflation may be explained by the high mass of the planets: 5.35^{+0.32}_{-0.35} M_J (TOI-2145 b) and 5.21 ± 0.52 M_J (TOI-2497 b). These six new discoveries contribute to the larger community effort to use TESS to create a magnitude-complete, self-consistent sample of giant planets with well-determined parameters for future detailed studies.
    Comment: 20 Pages, 6 Figures, 8 Tables, Accepted by MNRAS.